Intuitive Explanation of Non-stationary Gaussian Process Kernels

Background

What are GPs

Gaussian Processes (GPs) are powerful supervised learning methods for classification and regression problems. One of their major advantages is that they can estimate the uncertainty of their predictions by describing a probability distribution over the (potentially infinite) set of functions that fit the data. A GP can be defined as a stochastic process whose random variables follow a joint Gaussian distribution, specified by a mean function and a covariance function.

Görtler et al. provide an excellent visual exploration of Gaussian processes, offering mathematical intuition as well as a deeper understanding of how they work. In this article, we cover a minimal set of basic concepts that lay a foundation for understanding Gaussian processes, and then extend them to assess GP performance on regression problems.

Kernels

A kernel (also called the covariance function) describes the covariance, i.e., the joint variability, of the Gaussian process random variables and is essential for encoding prior information in the GP distribution. These covariance functions form the core of GP models. The radial basis function (RBF) kernel (also known as the Gaussian kernel) is a popular covariance function in GP modelling.

\begin{align} K_{rbf}(\mathbf{x}_i, \mathbf{x}_j) &= \sigma^2 \exp\left(-\frac{||\mathbf{x}_i - \mathbf{x}_j||_2^2}{2l^2}\right)\\ \end{align}
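The RBF kernel above can be computed directly in NumPy. This is a minimal sketch (the function name and signature are ours, not from any particular GP library):

```python
import numpy as np

def rbf_kernel(X1, X2, variance=1.0, lengthscale=1.0):
    """RBF (Gaussian) covariance between two sets of points.

    X1: (n, d) array, X2: (m, d) array -> (n, m) covariance matrix.
    """
    # Pairwise squared Euclidean distances ||x_i - x_j||^2
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2 * X1 @ X2.T)
    return variance * np.exp(-sq_dists / (2 * lengthscale**2))

X = np.linspace(0, 1, 5)[:, None]
K = rbf_kernel(X, X)
```

Note that the covariance is maximal (equal to $\sigma^2$) when $\mathbf{x}_i = \mathbf{x}_j$ and decays towards zero as points move apart, at a rate controlled by the lengthscale $l$.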

There are a variety of kernels that can be used to model different desired shapes of the fitting functions. We also discuss two broad categories of kernels, stationary and non-stationary, in Section ??, and compare their performance on standard datasets. The following parameters of the kernel function play a significant role in modelling a GP: the variance $\sigma^2$ and the lengthscale $l$.

In regression problems, these parameters are learnt from the training data by maximizing the following log marginal likelihood (equivalently, minimizing its negative, the nlml):

\begin{align} \log p(\mathbf{y}|X) &= -\frac{1}{2}\mathbf{y}^T(K+\sigma^2_n I)^{-1}\mathbf{y} - \frac{1}{2}\log|K+\sigma^2_n I| - \frac{n}{2}\log 2\pi\\ K &= \text{covariance\_function}(X, X)\\ \sigma_n^2 &= \text{likelihood noise variance}\\ n &= \text{cardinality of } X \text{ or } \mathbf{y} \end{align}
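The negative of this quantity can be evaluated term by term. Below is a self-contained sketch (our own helper names, assuming an RBF covariance) that uses a Cholesky factorisation for numerical stability rather than inverting $K+\sigma^2_n I$ directly:

```python
import numpy as np

def rbf(X1, X2, variance=1.0, lengthscale=1.0):
    # RBF covariance matrix between two sets of points
    sq = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2 * X1 @ X2.T)
    return variance * np.exp(-sq / (2 * lengthscale**2))

def nlml(X, y, noise_variance=0.1, variance=1.0, lengthscale=1.0):
    """Negative log marginal likelihood of a zero-mean GP.

    Mirrors the expression above: data-fit term, complexity
    (log-determinant) term, and the constant (n/2) log(2*pi).
    """
    n = len(y)
    K = rbf(X, X, variance, lengthscale) + noise_variance * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # (K + s^2 I)^{-1} y
    data_fit = 0.5 * y @ alpha
    complexity = np.sum(np.log(np.diag(L)))  # equals 0.5 * log|K + s^2 I|
    return data_fit + complexity + 0.5 * n * np.log(2 * np.pi)

X = np.linspace(0, 1, 6)[:, None]
y = np.sin(3 * X).ravel()
value = nlml(X, y)
```

In practice one passes `nlml` (as a function of the hyperparameters) to a gradient-based optimizer to fit $\sigma^2$, $l$, and $\sigma^2_n$.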

Now, let us visualize standard (stationary) GPs applied to some standard datasets.

Stationary GP on noisy sine curve dataset

Notice that the noisy sine data has uniform noise over the entire input region. We can also see that the smoothness of the sine function remains similar for any value of the input $X$.
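A fit like the one above can be sketched as follows. The original article's code is not shown here, so this is an assumed reconstruction using scikit-learn, with made-up sample size and noise level:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Noisy sine data: noise is uniform (same scale) across the whole input range
X = rng.uniform(0, 10, size=(40, 1))
y = np.sin(X).ravel() + 0.2 * rng.standard_normal(40)

# Stationary RBF kernel plus a white-noise term for the likelihood noise
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_test = np.linspace(0, 10, 100)[:, None]
mean, std = gp.predict(X_test, return_std=True)
```

Plotting `mean` with a band of `mean ± 2 * std` reproduces the familiar picture: a roughly constant-width uncertainty band, as expected when both the noise and the smoothness are uniform across $X$.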

Now, we show the same model fit over slightly more complex data.

There are two similarities between the noisy sine curve dataset and the noisy complex dataset: i) the noise in the data points is uniform across $X$; ii) the underlying function that generates the dataset appears equally smooth (stationary) across $X$.

In the real world, it is entirely possible that datasets do not satisfy one or more of the above properties. Now, we will show the performance of stationary GPs on a real-world dataset.

Stationary GP on Olympic marathon dataset

The Olympic marathon dataset includes gold medal times for the Olympic marathon from 1896 to 2020. One notable point about this dataset is that, in 1904, the marathon was badly organised, leading to very slow times.

Let us see how a standard GP performs on this dataset.